Search Results: "Scott James Remnant"

19 June 2007

Scott James Remnant: Slippery Slopes

One of the most interesting things about slippery slopes is how you never seem to be standing at the top of them, looking down. The slope seems fine at the top, and it’s only once you start down it that you realise this could end up with some broken limbs.

When Ubuntu was formed, Debian were having a debate about how to treat GFDL documentation. It was their opinion that the GFDL was inherently non-free, and they’ve since taken steps to remove all such licensed documentation from their main distribution. We took a more pragmatic approach, and decided that it maintained the spirit of freedom, and thus we continue to this day to ship that documentation in our main distribution. A similar discussion resulted in the handling of data files such as graphics, icons, fonts, etc. We decided that such things didn’t necessarily need to ship with corresponding source code, as frequently they don’t have any such thing, or when they do, it’s just as easy to modify the data file directly. The slope didn’t seem at all slippery back then.

Then came the issue of firmware: binary blobs in the kernel which are uploaded into a flash (or similar) chip in the hardware. Could we distribute these? On one hand, these blobs have always existed; they just used to be in ROM in the hardware, and the move to firmware doesn’t change that. On the other hand, they’re machine code, and if we had the source, we could improve the hardware as well. And what if we didn’t distribute them? Our users would be stuck without being able to use some fairly (to them) critical parts of their computer. In the end, the argument that firmware isn’t inherently any less free on the disk than in the ROM won, so we opted to continue to ship it. Perhaps that slope is a bit slippery, but we’ve got a good foothold.

Of course, at that point somebody notices the binary “Hardware Abstraction Layer” in the Atheros WiFi card driver. It’s not firmware; it’s run on the host processor, and is separate to “comply with FCC law”. (The ipw3945 driver has a binary daemon that allegedly performs the same legal function.) Again, if we don’t distribute that, a large section of laptop users will not be able to use their WiFi cards. A compromise was reached; because the driver is necessary we’d ship it, but in a special restricted component that makes it absolutely clear that it’s not completely free. Users could choose to remove that component and any packages from it, to keep their system untainted. OK, the foothold wasn’t as strong as we thought; we tumbled a bit, but we’re definitely on solid ground now!

That’s what we thought, anyway. Unfortunately it seems that there’s a point a little bit lower down the slope which has a fantastic vista. The views from there are just incredible, people are saying, much prettier than where we are now. The only trouble is that we’re not sure there’s a foothold down there; if we try for the better view, we could end up broken at the bottom. I’m talking, of course, about the NVIDIA binary X driver. (Some reports/blogs/etc. indicate we’re also considering the ATI fglrx driver; this isn’t true – that driver doesn’t support AIGLX, so it’s not being considered.) We’ve shipped this driver in our restricted section, but not enabled it by default. It’s been there for people who want it to switch on, if they know how, but the default driver has always been the free (albeit obfuscated) one in the Xorg distribution. The problem is that users do not need this driver; they can get decent enough 2D graphics support from the free(ish) driver.
In the long term, they may even get decent 3D graphics support from the nouveau driver effort. What’s the problem then? Simple: other operating systems use the 3D GPU to make the desktop seriously beautiful. If Linux doesn’t catch up and do the same, then we’ll be considered obsolete again. And just to drive the point home, some of our Linux friends shipped similar support in their last releases. They don’t enable the NVIDIA binary driver, but this means that a large percentage of their user base can’t get the bling without manual hackery. We needed a way to catch up with both the commercial operating systems and other Linux distributions; we have a policy of not doing our own software development, but only packaging what others have developed, so the only way for us to get ahead was to package something that others wouldn’t. Which brings us back to the NVIDIA binary driver. If we install that by default, we’ll be bringing a 3D desktop to more people. And we’ll gain a step ahead of the other distributions.

Will our users care? To be brutally honest, I think the answer is no! In fact, I suspect our users will largely love us for this decision. Most probably already install the NVIDIA driver anyway, because they think it’s better, or because (sadly, like me) they have a card combination not supported by the free one.

Will this make any difference to the effort to get NVIDIA to free up the driver, or at least the specs? Sorry, but to be honest again, I don’t think it’ll make one bit of difference. Linux distributions have been refusing to install it for years, and yet NVIDIA haven’t budged in their position. Perhaps a new tactic is required. Maybe if we do install it, we’ll be more likely to be chosen by OEMs as we can actually support the hardware they install. Then later, we may be able to actually affect their decision as to what hardware they install, and maybe then NVIDIA will pay attention.

Will this change the perception of Ubuntu in the Linux developer community? I’m not sure about this one; I think that those who already feel strongly about the distribution of binary drivers are probably already pretty grumpy at us distributing things like the Atheros and ipw3945 drivers. I suspect this will change the opinion of a lot of people who’ve been on the fence until now, probably equally in both directions.

Will we be able to sleep at night? Despite all of the above, personally I still think that installing and using the nvidia driver by default, when the nv driver would do, is the wrong decision. If the nv driver doesn’t work, I’m willing to accept the nvidia driver being used, provided that there’s some message informing the user what’s happened, why it has happened, and which alternate graphics cards they can purchase if they aren’t willing to accept a non-free binary driver. If the nv driver is good enough for 2D, I would prefer that we instead disabled the 3D desktop effects for this group of users (by default). A similar message could explain why this is disabled, again which alternate graphics cards provide this by default, but also provide a button for the user to enable it if they wish. While we should make the correct moral decision for the defaults, we shouldn’t stand in the way of users who wish to make a different decision for themselves. Later, as the nouveau driver becomes stable, we may be able to activate 3D support for nvidia users by default. I think that preserves our current foothold; we’d only activate it if there is no free alternative.
Where there is, we’d be educating users about why they may wish to consider alternatives to NVIDIA in future, while at the same time not getting too much in their way if they want to see the better view down the slope. Unfortunately it’s not my decision, and I suspect that the lure of the bling will win out. With any luck, we’ll find a foothold there, and the fallout of doing so won’t be too bad. I’m just worried that once we compromise on this, we’ll start compromising on other things … would we replace Firefox with a non-free web browser that rendered web pages “better”? The slippery slope only gets steeper from here on …

Scott James Remnant: Upstart 0.3

For the last couple of months, both at the Ubuntu Developer Summit in Mountain View and on the #upstart IRC channel, we’ve been discussing the changes we want to make to upstart for the Feisty Fawn release of Ubuntu. This will ship with a version of upstart based on the 0.3 series (it may end up getting called 0.5 before release); the primary goal for this is to have an init system that is suitable for general standalone use in any Linux distribution. I’ll be giving a talk at linux.conf.au 2007 in Sydney with that aim; I hope to persuade at least one other major Linux distribution that it’s the right solution. A complete list of the specifications and bugs being targeted for the 0.3 release can be found in Launchpad. The rest of this post will introduce some of the shiniest new things.

Writing Jobs

Upstart takes care of starting, supervising and stopping daemons itself, unlike the init script system, where you have to write code to do that yourself, often using a helper like start-stop-daemon. All you need to do is give the path to, and arguments for, the binary you wish to be started.
exec /usr/bin/dbus-daemon
Some jobs, especially quick tasks, will usually be written as shell scripts. To save having to write a separate file and invoke it, you can include shell script code directly in the job file instead of using the exec stanza.
script
    echo /usr/share/apport/apport > /proc/sys/kernel/crashdump-helper
end script
Usually it’s not sufficient to just start a binary and wish it well; you frequently need something to be run before it is started to prepare the system, and sometimes something after it terminates to clean up again. For these purposes, additional snippets of shell code can be given – to be run before the binary is started, and after it has finished. Unlike init scripts, these do not need to start or stop the daemon itself; that’s done automatically based on the exec stanza.
pre-start script
    mkdir -p /var/run/dbus
    chown messagebus:messagebus /var/run/dbus
end script
post-stop script
    rm -f /var/run/dbus/pid
end script
For consistency, executables may be specified with pre-start exec and post-stop exec instead of shell scripts as above. It’s sometimes useful to be able to run something after the binary has been started; for example, you may wish to attempt to connect to the daemon to determine whether it is ready to serve requests. post-start script or post-start exec can be used to do this.
post-start script
    # wait for listen on port 80
    while ! nc -q0 localhost 80 </dev/null >/dev/null 2>&1; do
        sleep 1;
    done
end script
It’s also useful to be able to notify a daemon that it may be about to be stopped, or delay it for a while. pre-stop script or pre-stop exec can be used for this.
pre-stop script
    # disable the queue, wait for it to become empty
    fooctl disable
    while fooq >/dev/null; do
        sleep 1
    done
end script
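Putting the fragments above together, a complete job file for the dbus-daemon example might look like this (a sketch assembled purely from the snippets above; the path /etc/event.d/dbus follows the convention mentioned later in this series, and the start/stop events are omitted):

# /etc/event.d/dbus: assembled from the fragments above (a sketch)

# prepare the system before the daemon starts
pre-start script
    mkdir -p /var/run/dbus
    chown messagebus:messagebus /var/run/dbus
end script

# upstart starts, supervises and stops the daemon itself
exec /usr/bin/dbus-daemon

# clean up after it has stopped
post-stop script
    rm -f /var/run/dbus/pid
end script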
Events

Events are now quite a bit more detailed than in previous versions; they’re still named with simple strings that are up to the system sending the event, but they can now include arguments and environment variables which are passed through to jobs being started or stopped as a result.
initctl emit network-interface-up eth0 -DIFADDR=00:11:D8:98:1B:37
This command will now output all of the effects of this event, and will not terminate until the event has been fully handled inside upstart. Events such as the above can be used by jobs that examine the event arguments and environment within their script:
start on network-interface-up
script
    [ $1 = lo ] && exit 0
    grep -q $IFADDR /etc/network/blacklist && exit 0
    # etc.
end script
or matched directly in the start on and stop on stanzas:
start on block-device-added sda*
The events generated by job state changes have also changed. Previously both jobs and events shared the same namespace, which not only caused confusion but actually caused some problems when one accidentally named a job after an event. The two primary events generated are now simply called started and stopped; they inform you that a job is fully up and running, or fully shut down again. The name of the job is received as an argument to these events.
start on started dbus
The started event is not emitted until the post-start task (described above) has finished; so the post-start task can delay other jobs from starting until they are actually able to connect to the daemon. Likewise the stopped event is not emitted until after the post-stop task has finished. The other two events emitted by a job are special; they are the starting and stopping events. The reason they are special is that the job is not permitted to start or stop until the event has been handled. This means that if you have a task to perform when your database server is being stopped, but before it’s actually terminated, it’s as simple as:
start on stopping mysql
exec /usr/bin/backup-db.py
MySQL won’t be terminated until the backup has finished. This is especially useful for daemons that depend on each other; for example, HAL needs DBUS: it shouldn’t be started until DBUS is running, and DBUS should not be stopped until HAL has been terminated. All the HAL job needs is:
start on started dbus
stop on stopping dbus
Likewise if tomcat is installed, Apache should not be started until tomcat is running; and tomcat should not be stopped until apache has been terminated. All the tomcat job needs is:
start on starting apache
stop on stopped apache
Failure

Nothing goes smoothly all of the time; sometimes tasks the job runs will fail, or the daemon itself will die. As well as providing the ability for a crashed daemon to be automatically restarted, upstart ensures that other jobs are notified with a special failed argument to the stopping and stopped events.
start on stopped typo failed
script
    echo "typo failed again :-("   mail -s "typo failed" root
end script
And if any job started or stopped by an event fails, it’s possible to discover that the event itself failed.
start on network-interface-up/failed
States

While tasks such as configuring a network interface, or checking and mounting a block device, are usually performed as a result of events, services are more complicated. Services normally need to be running while the system is in a certain state, not just when a particular event occurs. Therefore upstart allows you to describe arbitrarily complex system states by referring to the events that define their changes. For example, many services should be running only while the filesystem is mounted, and at least one network device is up. We have events to indicate the changes into and out of these states; we just need to combine them:
from fhs-filesystem-mounted until fhs-filesystem-unmounted
and from network-up until network-down
The until operator defines a period between two events, the and operator ensures we’re within both of these periods. Perhaps we need to be running while any display manager is:
from started gdm until stopping gdm
or started kdm until stopping kdm
Or maybe we only want to be run if a network interface comes up before bind9 has been started:
on network-interface-up and from startup until started bind9
These “complex event configurations” can appear in any job file; and any job file itself can serve as a reference for other jobs. They will be started and stopped at the same time as the named job:
with apache
Omitting the exec or script stanza from a job file means that it simply defines a state that can serve as a reference for others. As such, the multiuser state is simply a job file that defines it. As an added bonus, these states can still have pre-start, post-stop, etc. scripts.
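As a rough sketch of such a state-only job (my own example, reusing the events and stanzas shown above; the multiuser definition and the logger call are illustrative, not from the upstart source):

# a state-only job: no exec or script stanza, so nothing runs for
# the job itself; it merely defines a state other jobs can reference
from fhs-filesystem-mounted until fhs-filesystem-unmounted

# states may still carry pre-start, post-stop, etc. scripts
pre-start script
    logger "entering multiuser state"
end script

Another job could then be started and stopped along with this state simply by saying: with multiuser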

Scott James Remnant: Hiding arguments from ps

There are many articles on the Interwobble telling you how to set the process title on Linux; they all concentrate on the problem of placing an arbitrarily long string in argv[0] to report status information in the process list. But what if argv[0] is fine, and what you want to do is remove the following arguments from the process list? Perhaps they contain sensitive information, or perhaps they’re just likely to be surprising. A fictional example may look like this (I’ve added the $ to demonstrate a later problem):
11234 ?        S      0:00 some-program --username=foo --passphrase=bar$
How not to do it

The first thing you’ll probably try is just setting those arguments in the argv list to NULL. For example, with code like this:
for (i = 1; i < argc; i++)
    argv[i] = NULL;
That won’t work, because that array is just a list of pointers to the real area in which the arguments are stored, and is local to your main function. Modifying it has no effect on what ps and similar tools see. The next thing you might try, as a C programmer, is setting the first character of each argument to a NULL byte. It’s not unreasonable to assume that these strings are likely to be NULL-terminated. You might use code like:
for (i = 1; i < argc; i++)
    strcpy (argv[i], "");
But that doesn’t seem to work either, in fact, what you get in the process list is something rather odd and unexpected!
    11234 ?        S      0:00 some-program  -username=foo  -passphrase=bar$
All that’s done is lost the first character of each argument. In frustration, you might try overwriting the entire argument with NULL bytes, e.g.
for (i = 1; i < argc; i++)
    memset (argv[i], '\0', strlen (argv[i]));
Checking the process list, that will seem to have worked…
    11234 ?        S      0:00 some-program                                $
On closer inspection though, it’s not perfect. You’ll notice that the line ends with spaces in place of where the arguments used to be. If the number or length of arguments you’re trying to hide is particularly long, you can end up with strange blank lines in ps output. Not ideal.

Why this happens

To understand why this happens, we need to understand how the kernel reports the command line to ps. ps reads the /proc/PID/cmdline file, which contains the entire command line as a single character array, with each argument terminated by a NULL byte. Looking at it with od -t c, we see something like:
0000000   s   o   m   e   -   p   r   o   g   r   a   m  \0   -   -   u
0000020   s   e   r   n   a   m   e   =   f   o   o  \0   -   -   p   a
0000040   s   s   p   h   r   a   s   e   =   b   a   r  \0
When we changed just the first character to a NULL byte, we ended up with:
0000000   s   o   m   e   -   p   r   o   g   r   a   m  \0  \0   -   u
0000020   s   e   r   n   a   m   e   =   f   o   o  \0  \0   -   p   a
0000040   s   s   p   h   r   a   s   e   =   b   a   r  \0
ps is just assuming that \0 represents any argument gap, so outputs an ordinary space character every time it encounters one. This is why the first character of each argument appeared to be replaced by a space, and the rest were still printed. When we overwrite the arguments, we ended up with:
0000000   s   o   m   e   -   p   r   o   g   r   a   m  \0  \0  \0  \0
0000020  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0
0000040  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0
Writing all those zeros didn’t change the length of the command line character array, so ps got all those NULLs and output a space character for each one, assuming it was an argument break. What we need to do is decrease the length of this character array, so ps only sees the first argument.

How to do it

The kernel maintains internal pointers to the start and end of the arguments array in the process address space. When you ask for the cmdline file in /proc, it replies with all of the characters between these two positions, with a length of the difference. The kernel doesn’t provide us with any method of moving this pointer, so we’d initially appear to be stuck. However, there turns out to be a very nasty solution. In order to support processes changing their title, the kernel has to support the arguments possibly overrunning into the environment space. If it thinks this has happened, it concatenates the environment space onto the end of the buffer and returns the entire area, with the length set to the length up to the first NULL byte. All we need to do is fool the kernel into thinking we’ve overrun into the environment space; it’ll then just call strlen on the arguments list and return only the first argument, which is exactly what we want! What technique does the kernel use for deciding whether or not the arguments have overrun into the environment space? It checks whether the final character in the argument space is NULL or not. Thus, paradoxically, to hide the argument list from ps, we remove the NULL byte:
if (argc > 1) {
    char *arg_end;

    arg_end = argv[argc-1] + strlen (argv[argc-1]);
    *arg_end = ' ';
}
The check against argc is necessary, as we don’t want to remove the NULL character on the end of argv[0], since that would cause the first environment variable to be inadvertently concatenated. Now when we look at ps, we see just the program name:
    11234 ?        S      0:00 some-program$
And even when we look in /proc/PID/cmdline, we only see the program name and no additional arguments.
0000000   s   o   m   e   -   p   r   o   g   r   a   m  \0

Scott James Remnant: Think about things

One of my pet dislikes is those people that pay lip-service to a particular problem, such as Accessibility, Internationalisation or Usability, without actually thinking about it. My favourite example can be found at the Trafford Centre in Manchester, England. Somebody there has clearly realised that accessibility is a problem, and ensured that every single sign has Braille added to it so that partially-sighted or blind people are able to read what they say. They’ve only been paying lip-service to the problem though, and not thinking about it. The evidence? The following sign, mounted directly on a door, with Braille on it: “Do not stand directly in front of this door.”

Scott James Remnant: Keyboards

I was really starting to worry that I wouldn’t be able to find another Cherry G80-3000 keyboard; even Cherry’s website didn’t list it anymore, instead only showing the vastly less clicky G81 range. Happily I’ve found a stockist. http://www.cherrykeyboardsrus.co.uk/

Scott James Remnant: Something for everybody

According to the current issue (#93) of Linux Format, Ubuntu 7.04 (“Feisty Fawn”) is “…a dull release for Ubuntu, leaving Fedora to storm ahead…” (p. 23) whilst “shaping up to be one of the most innovative Linux distro releases of the year.” (p. 38) Especially amusing for myself is that, with Upstart, they “seldom notice any difference in boot speed” (p. 42), yet “Ubuntu 7.04 boots up in record time, leaving other Linux distros in the dust.” (p. 22) (As anyone who’s ever read anything about Upstart will know, Ubuntu still uses the SysV-rc scripts so there should be no difference in speed at this point. Funnily enough, they identified the reason Ubuntu boots fast in the same issue; “Changing the /bin/sh symlink to point to Dash instead of Bash can significantly shorten boot times” (p. 33) – unfortunately they simultaneously claim that Dash is only “almost POSIX compliant”, without explaining why they think it isn’t.) In this modern world, the lack of any editorial direction or basic research into what’s being printed is quite refreshing.

Scott James Remnant: Ten Really Cool Things

Come see me at LugRadio Live! I will be demonstrating Ten Really Cool Things that you can do with Linux; it should be a fun show!

15 March 2007

Martin F. Krafft: I want Jabra!

My Nokia 6230 and I decided to split ways, or rather someone decided it was time for us to break up. Given that I have all my contacts on the computer anyway, and the 6230 was nearing its end of life anyway, I skipped the part about being annoyed and hopped into the store yesterday to pick up a new subsidised phone in exchange for a contract extension that was due anyway. Remembering the chat with Scott James Remnant at last year's Debian Barbeque, I went for the Sony Ericsson W810i. I initially wanted to stick with Nokia because I own many power adapters already, but Nokia recently switched the plugs, so that was no longer a motivation. I did have a brief look at the brand new E65 (the first Nokia phone to look decent, says Hanspeter), but I seriously doubt that Nokia actually fixed the problems with the E60, considering that, from the 6230 via the 6230i to the 6233, Nokia did not fix a single one of the problems I've had with the 6230.

So now I have this shiny W810i (with a white case) and I am struggling to use it, having grown accustomed to Nokia over the past years. The worst is that it won't play nice with my Jabra BT250 headset, which has treated me very well and which I'd much rather keep than replace. However, while I can speak over it, I cannot use any of the buttons on the headset, meaning I cannot receive calls without touching the phone. The phone thus thinks that I would much rather talk through the phone than the headset, and I have to transfer the sound manually. So I am wondering: does anyone of you have a newer Jabra BT headset, such as the BT250v or the BT500v, and can confirm that it works with Sony Ericsson phones? If so, please let me know! NP: Rage Against The Machine / Rage Against The Machine

11 January 2007

Russell Coker: ps and security

A post by Scott James Remnant describes how to hide command-line options from ps output. It's handy to know, but that post made one significant implication that I strongly disagree with. It said of command-line parameters "perhaps they contain sensitive information". If the parameters contain sensitive information then merely hiding them after the fact is not what you want to do, as it exposes a race condition!

One option is for the process to receive its sensitive data via a pipe (either piped from another process or from a named pipe that has restrictive permissions). Another option is to use SE Linux to control which processes may see the command-line options for the program in question.
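As a rough sketch of the pipe approach, reusing the fictional some-program from Scott's post (the --passphrase-fd option is hypothetical, borrowed from the convention tools like gpg use):

# the passphrase travels over stdin rather than argv, so it never
# appears in /proc/PID/cmdline in the first place
printf '%s' "$PASSPHRASE" | some-program --username=foo --passphrase-fd=0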

In any case removing the data shortly after hostile parties have had a chance to see it is not the solution.

Apart from that it's a great post by Scott.

5 December 2006

Scott James Remnant: Before Upgrading to Edgy

It seems that some people with heavily customised Ubuntu installations have had problems upgrading from dapper to edgy. While we do test upgrades as much as we can, there’s no way to test every possible permutation, so problems do creep in. Here’s a checklist to perform before upgrading to minimise any problems you might have: If, after taking all of these precautions, your upgrade still fails; please file a bug report, and try to include as much information as possible. Provide the list of packages that failed, and if possible the error message provided by them. Provide /var/log/dpkg.log and the files in /var/log/dist-upgrade.

29 November 2006

MJ Ray: Firmware: Ubuntu's Choice

Scott James Remnant summarises the different path taken by Ubuntu about firmware as having set them on Slippery Slopes. As a reaction, the FSF has backed an Ubuntu-based distribution called gNewSense, which takes a more Debian position about programs (but not manuals). Alex Valentine commented:
"Yet another FSF forking project, quite an innovation. Let's take an existing project that works fine and make less compatible, in the name of "freedom," and plop a bunch of Gnu/ GNU/labels/."
Ubuntu seems to be getting criticised a lot this week and there's enough people who don't think it "works fine" to start a distribution. JOOI, what else were FSF forking projects? I can think of several reimplementations, but not that many forks. thebluesgnr commented:
"An Ubuntu guy complaining about forking... unbelievable."

28 November 2006

Benjamin Mako Hill: Bring the Bling?

I've been perplexed by the recent fracas around the possibility of Ubuntu shipping non-free drivers by default as part of the feisty release goal to bring the bling. The issue has been discussed by many people, most recently and eloquently in a blog post by Scott James Remnant. I have never been 100% comfortable with our (Ubuntu's) decision to ship proprietary drivers. The Ubuntu philosophy document -- which Mark Shuttleworth and I drafted originally -- says quite unambiguously:
Every computer user should have the freedom to run, copy, distribute, study, share, change and improve their software for any purpose, without paying licensing fees.
I believe this deeply. Consequently, I feel that binary drivers in Ubuntu are, and always have been, a failure to live up to our principles. If our philosophy document is still representative, most people in our community should be a little uncomfortable with them -- even if they think, as most of us do, that shipping them is the correct thing to do. Early on Ubuntu decided that binary firmware and hardware drivers would be distributed and quasi-supported in restricted. While not completely comfortable with the move, I could understand and defend it for three reasons:
  1. The drivers were easily removable and not enabled by default unless necessary to achieve functioning hardware.
  2. As part of this process, we promised to work with vendors to accelerate the freeing/open-sourcing of their drivers.
  3. Perhaps most importantly, I couldn't imagine not using them if it were the choice between non-free drivers or inoperable hardware.
The proposal to include the proprietary nVidia driver in place of the existing free nVidia driver is important because it changes, or eliminates, the first and third justifications. Since the first issue has been discussed in depth, it seems appropriate to focus on the third and final point. The point is perhaps best introduced with an observation:
While I have met many people who research and buy hardware to avoid the need for non-free drivers, something I do myself, I have never met anyone who voluntarily chooses inoperable hardware on a computer they use over working hardware through a non-free driver.
While not airtight, this observation is precisely why the old policy of using non-free drivers only when no free drivers exist works so well. By effectively synonymizing use of the non-free driver with use of the hardware, the crisis created by non-free drivers is rendered paradoxical: anybody using a piece of hardware that only works with non-free drivers is, by definition, either using non-free drivers or not using the hardware. The only way to not use non-free drivers in situations where no free drivers exist is to not use hardware that requires them. If you use different hardware, you will not need to use non-free drivers. This important justification is completely inapplicable to the proposal to include non-free drivers in place of free (but not perfect) alternatives.

Second, while shipping non-free drivers is wrong because we lose our freedom and violate our philosophy, the desire for basic hardware support is universal and important enough that it's defensible. One reason that the "binary by default for bling" situation alarms me is that I don't believe that the desire for desktop bling is as universal, or as strong, as the desire for working hardware. An analogy might be stealing food to feed your family and stealing a CD from the record store. Both are wrong because stealing is wrong, but I have a lot more sympathy for the hungry thief in the first example because CDs are not as necessary as food and because many people would rather go without a new CD than steal. We can ask: "Which is worse: selling your soul for a decent job or selling your soul for a PS3?" In an absolute sense, neither is worse; but it's easier to support the former than the latter. I'm very impressed with Beryl -- but I have trouble empathizing strongly with the need for a spinning desktop cube.

New users may be impressed and attracted to an Ubuntu desktop powered by proprietary drivers. They might also be impressed by proprietary applications. However, it is self-defeating to attract new users to our free platform and principles by compromising our core values. Many users already make the decision to use non-free video drivers, and Ubuntu makes it very easy to do so once they have made that choice. Installing binary drivers by default is choosing for the large number of users who will stick with any default. It may be a conservative stance, but as long as we believe that a non-insignificant number of users would choose to live without bling before compromising Ubuntu's philosophy and goals, we should make the difficult decision to side with freedom by default.

When discussing this issue, many people say, "it's not my decision" or "it isn't my call". Regardless of whether anyone agrees with me, and regardless of who the final decision is up to -- and it's not clear to me now -- I believe that the Ubuntu community has the power to inform and shape the development of their distribution. As the Ubuntu community, we can make our position heard. The best place to do that right now seems to be the Accelerated X specification comments page. One can also email the Technical Board (for technical comments) and the Community Council (for principle or philosophy related issues).

Sven Mueller: Canonical and Debian - friend or foe?

Pointed at the issues by Josselin Mouette’s post, I became aware of a list of issues posted by JP Rosevear, which is a reply to Mark Shuttleworth’s post to the opensuse mailing list. Note especially point 2, which is: “Preventing the Debian GNOME maintainer from updating GNOME packages until after Ubuntu LSO had shipped because you had hired him.” If this issue is true, it’s the worst thing I have ever heard of regarding Canonical. I mean, it’s bad enough that points 1 (“Having a stated policy of not funding any significant new software development because the Return on Investment is not good enough”) and 3 (“Not releasing any source code for launchpad/rosetta/malone to maintain a competitive advantage”) certainly are true (even though I can’t prove, and therefore don’t 100% believe, the second part of #3). But unless there is a very good reason to delay the submission of the patches developed to the “upstream”, Debian, point 2 is far worse still.
It’s bad enough that Canonical hired the most active Debian Developers, depriving Debian of much of their time, while not really supporting Debian (i.e. while not actively submitting patches to Debian). It would have been better for the community if Canonical had hired less active DDs (or even non-DDs). Sure, it would have been harder to select some sufficiently qualified people for the jobs, but the total outcome would have been better IMHO (getting time from previously uninvolved people, while keeping the existing contributors). Anyway, I really would like to know whether the mentioned point #2 is true, and if so whether any valid reason for doing so could be given, before I set my opinion about this issue in stone. One thing is sure however: I’m becoming more and more critical regarding Ubuntu and Canonical over time, while I once hoped that the opposite would become true for those criticizing Canonical at that time. Actually I still hope so (and thus I hope to have reason to become less sceptical again).
Another thing is also sure: if Canonical wants to use Debian as a base for much longer, it should make damned sure to work with Debian more actively instead of seemingly working against it (more or less openly). Update: As also pointed out in a comment below, a post by Scott James Remnant proves point 1 from the aforementioned list and gives a further hint that point 3 might be true. More precisely, it says “we have a policy of not doing our own software development, but only packaging what others have developed”, which probably means “not doing our own major software development”, since - as a comment to that post also says - Canonical did some software development, like launchpad/rosetta/malone and some other relatively small projects.

27 October 2006

Scott James Remnant: Not That Edgy

Now that Ubuntu 6.10 (“The Edgy Eft”) has been released, we’re starting to see reviews of it; while largely positive, one common theme is that Edgy isn’t quite as edgy as people were expecting. Mark’s original announcement is certainly the likely reason for this expectation. In it he set the scene for a bold, brash, bleeding edge release to counter the boring dapper release. Unfortunately, the simple truth is that reality set in.

When planning the release schedule for edgy, we realised that if we wanted to get back to our original six-monthly release schedule, we were only going to have four months in which to develop it. That’s still enough time to throw everything to the wind, and shove out a “release” at the last moment when the CD happens to be installable. It’d be edgy in the extreme sense. Unfortunately, while exciting, we felt that such a release would ruin Ubuntu’s reputation. It’d be a release that, for all intents and purposes, would only be interesting to Ubuntu developers. Mark has already touched on this in his blog, citing a conversation he had with Matt (the Ubuntu CTO). Especially noteworthy is the mention that the kinds of itches that developers get are not the same as those users get. We get itches because the installer still relies on devfs-style paths, or because it’s not possible to boot the system without race conditions. None of these things are noticeable to the end-user.

We’re drawing up the list of topics to be discussed at UDS Mountain View in two weeks’ time; this is as good a guide as any for what we’re thinking about for feisty, the next release. At the end of that summit, we’ll have a list of approved specifications, assigned to developers throughout the community for implementation in the feisty schedule. Obviously some of those won’t make it due to time constraints, but the best thing about a six-monthly release cycle is that they’re not delayed for long.

Scott James Remnant: Having Left Debian

It’s been over a year now since my last proper upload to Debian, and nine months since I announced my intent to put aside working on Debian for a while. With Matthew Garrett’s resignation from Debian, several people have compared it to my own “resignation” from Debian. It has got me thinking about whether I currently intend to ever end my “Sabbatical” and return as an active Debian Developer.

I think that the end of my love-affair with Debian started at Debconf last year, where several developers treated those of us who also worked on Ubuntu quite rudely. Someone was attacked for wearing an Ubuntu t-shirt at the conference, while someone else was applauded for wearing a “Fuck Ubuntu” t-shirt. That’s where I realised that maybe I didn’t have as much in common with these people as I thought I did. I still don’t understand why Debian singles Ubuntu out for this kind of treatment; we’re still the only derivative distribution that makes all of our patches to Debian available, yet Ubuntu is claimed to not do anything at all. Another example is that Ubuntu is being asked to change the Maintainer field of every package, something no other derivative is being asked to do or has ever been asked to do. Martin Krafft has had some interesting things to say about this strange relationship in the past.

If that was the start of my falling out with Debian, I think that Debian considering removing documentation and firmware from the distribution, especially the documentation, was another point where I started wondering whether I shared anything in common with the project anymore. Call me strange, but I think that one of the fundamental purposes of a Linux distribution is to be useful to its users. If nobody can use the distribution because it doesn’t support their hardware, and even if it did, all the documentation has been stripped out, I started to wonder what its aims are. It became increasingly apparent that the only users Debian was considering a priority were its own developers.

And the third thing is simply a matter of Fun. Fun for me, at the moment at least, has been to build a system that fits together extremely nicely, with each component doing the right thing. For Ubuntu this has meant being a driving force for it being Linux 2.6 only, with reliance on udev for hardware, etc. Upstart is just a continuation of these goals, getting a system which all “just works”, even if it means throwing out a few things people were previously fond of. All of these things would have been impossible to do in Debian itself. Getting upstart installable required changes to twelve different packages, including sysvinit itself; at worst, this would have required the agreement of twelve different maintainers in Debian. It’s often exhausting just persuading one maintainer of the reason for a change; persuading a dozen would have been a herculean effort.

Perhaps Matthew is right: what Debian lacks is a single leadership. There’s nobody in Debian I could have gone to for approval of the changes I wanted to make, whose decision developers cannot overrule. Ubuntu has such people (the Technical Board and sabdfl), which gives the project an obvious direction, instead of a couple of hundred people all pulling in different directions. In a way, I don’t feel that I’ve left Debian. I feel like I was happily going along, only to realise the mob had gone in a different direction, with no easy way of rejoining the group.

Scott James Remnant: Upstart can now replace sysvinit

Today I reached another milestone in the development of upstart: the packages in universe can now replace the existing sysvinit package. Before trying this, make sure your installation is up to date, as we’ve had to split out some parts of sysvinit into a new sysvutils package. If you’re up to date, and want to try it out, install the upstart and upstart-compat-sysv packages from universe. Note that the first reboot after you’ve installed the packages (from sysvinit to upstart) will be a little tricky … use reboot -f. (These steps are recapped in a short transcript below.) If your system boots and shuts down normally, everything’s working just fine. Note that both will be somewhat more quiet than you’re used to, unless you have usplash running. Throughout the rest of this entry, I’ll try to answer some of the questions and comments that I’ve received since the last post.

Events

As I talked about previously, upstart is an event-based init daemon. Events are the primary force for having your services and tasks started and stopped at the appropriate time and in the appropriate order. So what are events and where do they come from? (Note that this part is under development, so may change in later releases.) Events are just simple strings that may be sent by any process when something it is tracking the state of changes. They have no state or longevity; if, when queued, they do not cause any job state changes, then they have no effect unless they are sent again. Jobs can list which events cause them to be started if they are not already running, and which events cause them to be stopped if they are running. Multiple start and stop events may be listed, in which case the first to occur changes the job until the next one occurs. upstart itself generates the following system events: The shutdown tool included in the package also causes one of the following events to be sent once the “shutdown” event has been handled: Jobs also generate events whenever they change state; this is the primary source of events for ordering: And as mentioned, any other process on the system may send events through the control socket, or just by using initctl trigger EVENT. For now this is just the event string; however, it’s intended that the event may include other details, including environment variables and even file descriptors.

Typical example

To clarify how it all hangs together, here’s an example (using fictional names) of how the tasks and events can be arranged to provide race-free mounting of filesystems. By breaking this job into these small tasks, we can see how the pieces fit together. Because everything is now done on events, there are no race conditions; we know that any filesystem listed in /etc/fstab will be checked and mounted. The only reason they wouldn’t be is if there’s an error of some kind, and that means you have larger problems anyway, and the system administrator would have a shell to fix it. Of course, the moment they finish checking the filesystem and mount it, the boot process would carry on. There’s no reason that any of these events need to be generated by the upstart daemon itself; it can receive them from any other daemon on the system, such as udev, acpid, etc. This keeps the focus of the init daemon narrow. A large part of the future development will be working out exactly what kinds of events we want init itself to generate, what kinds we want to come from elsewhere, and what the contents of an event can be.
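To recap the installation steps above as a short transcript (a sketch, assuming apt-get and sudo on an up-to-date system):

sudo apt-get update
sudo apt-get dist-upgrade                          # make sure you are up to date first
sudo apt-get install upstart upstart-compat-sysv   # both packages are in universe
sudo reboot -f                                     # the first reboot into upstart needs -f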
Getting Involved

If you want to get involved with trying to nudge the direction of upstart development, you can join the upstart-devel mailing list at http://lists.netsplit.com/ Or if you just want to grab the source code: tarballs are published at http://people.ubuntu.com/~scott/software/upstart/ and the bzr archive is at http://bazaar.launchpad.net/~keybuk/upstart/main

Scott James Remnant: Upstart in Universe

upstart is a replacement for the init daemon, the process spawned by the kernel that is responsible for starting, supervising and stopping all other processes on the system. The existing daemon is based on the one found in UNIX System V, and is thus known as sysvinit. It separates jobs into different “run levels” and can either run a job when particular run levels are entered (e.g. /etc/init.d/rc 2) or continually during a particular run level (e.g. /sbin/getty). The /etc/init.d/rc script is also based on the System V one (and is in the sysv-rc package); it simply executes the stop then start scripts found in /etc/rcN.d (where N is the run level) in numerical order.

Why change it?

Running a fixed set of scripts, one after the other, in a particular order has served us reasonably well until now. However, as Linux has got better and better at dealing with modern computing (arguably Linux’s removable device support is better than Windows’ now), this approach has begun to have problems. The old approach works as long as you can guarantee when in the boot sequence things are available, so you can place your init script after that point and know that it will work. Typical ordering requirements are: This worked ten years ago, why doesn’t it work now? The simple answer is that our computer has become far more flexible: We’ve been able to hack the existing system to make much of this possible, however the result is chock-full of race conditions and bugs. It was time to design a new system that can cope with all of these things without any problems. What we needed was an init system that could dynamically order the start up sequence based on the configuration and hardware found as it went along.

Design of upstart

upstart is an event-based init daemon; events generated by the system cause jobs to be started and running jobs to be stopped. Events can include things such as: In fact, any process on the system may send events to the init daemon over its control socket (subject to security restrictions, of course), so there is no limit. Each job has a life-cycle which is shown in the graph below: The two states shown in red (“waiting” and “running”) are rest states; normally we expect the job to remain in these states until an event comes in, at which point we need to take action to get the job into the next state. The other states are temporary states; these allow a job to run shell script to prepare for the job itself to be run (“starting”) and clean up afterwards (“stopping”). For services that should be respawned if they terminate before an event that stops them is received, they may run shell script before the process is started again (“respawning”). Jobs leave a state because the process associated with them terminates (or gets killed) and move to the next appropriate state, following the green arrow if the job is to be started or the red arrow if it is to be stopped. When a script returns a non-zero exit status, or is killed, the job will always be stopped. When the main process terminates and the job should not be respawned, the job will also always be stopped. As already covered, events generated by the init daemon or received from other processes cause jobs to be started or stopped; also, manual requests to start or stop a job may be received. The communication between the init daemon and other processes is bi-directional, so the status of jobs may be queried, and even changes of state of all jobs may be received.

How does it differ from launchd?
launchd is the replacement init system used in MacOS X, developed as an “Open Source” project by Apple. For much of its life so far, the licence has actually been entirely non-free, and thus it has only become recently interesting with the licence change. Much of the goal of both systems appears initially to be the same; they both start jobs based on system events, however the launchd system severely limits the events to only the following: Therefore it does not actually allow us to directly solve the problems we currently have; we couldn’t mount filesystems once the “filesystem checked” event has been received, we couldn’t check filesystems when the block device is added, and we certainly couldn’t start daemons once the complete filesystem (as described by /etc/fstab) is available and writable. The launchd model expects the job to “sit and wait” if it is unable to start, rather than provide a mechanism for the job to only be started when it doesn’t need to wait. Jobs that need /usr to be mounted would need to spin in a loop waiting for /usr to be available before continuing (or use a file in a tmpfs to indicate it’s available, and use that modification as the event). This is not especially surprising given that Apple have a high degree of control over both their hardware and the actual underlying operating system; they don’t need to deal with the wide array of different configurations that we have in the Linux world. Had the licence been sufficiently free at the point we began development of our own system, we would probably have extended launchd rather than implement our own. At the point Apple changed the licence, our own system was already more suitable for our purposes.

How does it differ from initng?

Initng by Jimmy Wennlund is another replacement init daemon intended to replace the sysvinit system used by Linux. It is a dependency-based system, where upstart is an event-based system. The notion of a dependency-based system is interesting to talk about at this point. Jobs declare dependencies on other jobs that need to happen before the job itself can be started. Starting the job causes its dependencies to be started first, and their dependencies, and so on. When jobs are stopped, if running jobs have no dependencies, they themselves can be stopped. It’s a neat solution to the problem of ordering a fixed boot sequence and the problem of keeping the number of running processes to the minimum needed. However, this means that you need to have goals in mind when you boot the system; you need to have decided that you want gdm to be started in order for it, and its dependencies, to be started. Initng uses run levels to ensure this happens, where a run level is a list of goal jobs that should be running in that run level. It’s also not clear how the dependencies interact with the different types of job; a dependency on Apache would need the daemon to be running, where a dependency on “checkroot” would need the script to have finished running. Upstart handles this by using different events (“apache running” vs. “checkroot stopping”). Again, while interesting, Initng does not solve the problems that we wanted to solve. It can reorder a fixed set of jobs, but cannot dynamically determine the set of jobs needed for that particular boot. A typical example would be that if the only dependency on the job that configures networking is the mount network filesystems job, then should that job fail or not be a goal (e.g.
because there are no network filesystems to be mounted), the result is that network devices themselves will not be configured. You could make everything a goal, and just use the dependencies to determine the order, however this is less efficient than just ordering the existing sysv-rc scripts (which can be done at install time). Another example is that often you simply don’t know whether something is a dependency or not without reading other configuration; for example, the mount network filesystems job may be a dependency of everything under /usr, or may just be a dependency of anything allowing the user to log in, if it just mounts /home. The difference in model can be summed up as “initng starts with a list of goals and works out how to get there, upstart starts with nothing and finds out where it gets to.”

How does it differ from Solaris SMF?

SMF is another approach to replacing init, developed by Sun for the Solaris operating system. Like initng it’s a dependency-based system, so see above for the differences between those systems and upstart. SMF’s main focus is service management: making sure that once services are running, they stay running, and allowing the system administrator to query and modify the states of jobs on the system. Upstart provides the same set of functionality in this regard; services are respawned when they fail, and system administrators can at any time query the state of running services and adjust the state to their liking.

Will it replace cron, inetd, etc?

The goal of upstart is to replace those daemons, so that there is only one place (/etc/event.d) where system administrators need to configure when and how jobs should be run. In fact, the goal is that upstart should also replace the “run event scripts” functionality of any daemon on the system. Daemons such as acpid, apmd and Network Manager would send events to init instead of running scripts themselves with their own peculiar configuration and semantics. A system administrator who only wanted a particular daemon to be run while the computer was on AC power would simply need to edit /etc/event.d/daemon and change “on startup” to “on ac power” (a sketch of such a job appears at the end of this post).

What about compatibility?

There’s a lot of systems administrators out there who have learned how Linux works already and will not want to learn again immediately; there’s also a large number of books that cover the existing software and won’t cover upstart for at least a couple of years. For this reason, compatibility is very important. upstart will continue to run the existing scripts for the foreseeable future, so that packages will not need to be updated until the author wants. Compatibility command-line tools that behave like their existing equivalents will also be implemented; a system administrator would never need to know that crontab -e is actually changing upstart jobs.

Does it use D-BUS?
“To D-BUS people, every problem seems like a D-BUS problem.” —Erik Troan
The UNIX philosophy is that something should do just one job, and do it very well. upstart’s one job is starting, supervising and stopping other jobs; D-BUS’s one job is passing messages between other jobs. D-BUS does provide a mechanism for services to be activated when the first message is sent to them, thereby starting other jobs. Some people have taken this idea and extended it to suggest that all a replacement init system need do is register jobs with D-BUS and turn booting into a simple matter of message passing. This seems wrong to me; D-BUS would need to be extended to supervise these services, provide means for them to be restarted and stopped, as well as deal with being process #1, which means cleaning up after children whose parents have died, etc. It seems far simpler to arrange for D-BUS to send an event to init when it needs a service to be started, and focus on being a very good message passing system. The IPC mechanism used by upstart is not currently D-BUS because of various problems; however, it’s always been expected that even if init itself doesn’t communicate with D-BUS directly, there would be a D-BUS proxy that would ensure messages about all init jobs and events would be given to D-BUS, and D-BUS clients could send messages to init to query and change the state of jobs.

What is the implementation plan?

Because this is process #1 we are changing, we want to make sure that we get it right. Therefore, instead of releasing a fully-featured daemon and configuration to the world, we’re developing it in the following stages:
  1. Principal development; at the end of this stage the daemon has been implemented and can manage jobs as described.
  2. Replacement of /sbin/init while running the existing sysv-rc scripts. This is the shake-down test of the daemon, can it perform the same job as the existing sysvinit daemon without any regressions?
  3. /etc/rcS.d scripts replaced by upstart jobs. These constitute the majority of the tasks for booting the system into at least single-user mode, and contain many of the current ordering problems and race conditions. If the daemon solves the problems here, it will be a success.
  4. Other daemons’ scripts replaced by upstart jobs on a package-by-package basis; this will be an ongoing effort, during which upstart will continue running the existing sysv-rc scripts as well as its own jobs. During this time the event system may be tweaked to ensure it truly solves the problems we need it to.
  5. Replacement of cron, atd, anacron and inetd. This will happen alongside the above, and will result in a single place to configure system jobs.
  6. Modification of other daemons and processes to send events to init instead of trying to run things themselves.
The current plan is that we will be at least part of the way into stage #3 by the time edgy is released, with that release shipping with upstart as the init daemon and the most critical rcS scripts being run by it, to correct the major problems. For edgy+1 we hope to have completed stage #5 and to be at least part of the way into stage #6. From the start of development of edgy+2, no new packages will be accepted unless they provide upstart jobs instead of init scripts, and init scripts will be considered deprecated.

What state is it in now? The init daemon has been written and is able to manage jobs as described above, receiving events on the control socket to start and stop them. It has now been uploaded to the Ubuntu universe component in the upstart package, for testing before it becomes the init daemon. We welcome any experienced users who want to help test this; install the package and follow the instructions in /usr/share/doc/upstart/README.Debian to add a boot option that will use upstart instead of init. If your system boots and shuts down normally (other than a slightly more verbose boot without usplash running) then it is working correctly. Other types of events will be added as required during development and testing. Currently only a basic client tool (initctl) has been written; compatibility tools such as shutdown will be written over the next week or two, before upstart replaces our sysvinit package.
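To make the /etc/event.d configuration discussed above concrete, here’s a rough sketch of what a simple job definition might look like. The daemon name and path are made up, and the exact stanza syntax is still evolving at this stage, so treat this as illustrative rather than definitive:

    # /etc/event.d/mydaemon -- a hypothetical job definition
    # Start the job when the system comes up...
    start on startup
    # ...and stop it again when the system goes down.
    stop on shutdown
    # If the daemon dies unexpectedly, restart it.
    respawn
    # The daemon itself (made-up path).
    exec /usr/sbin/mydaemon

The “AC power” example from earlier would then be a one-line change: replace “on startup” with “on ac power”, and the job follows the power supply instead of the boot sequence.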

Scott James Remnant: Parallel Peer Programming

Here at Canonical Towers we have several staff who worship at the altar of Extreme Programming, and as such many of the methodologies and rituals prescribed by that religion find their way into our day-to-day working practices. A few of these came together in an interesting way a few weeks ago, and it was suggested by a cow-orker that I blog about it so he could give the URL to people.

The first ritual is that of the sprint. I don’t think this is orthodox XP, but rather something inspired by it that we picked up from the Python community. With all of the Canonical developers scattered across the globe, mostly working from their own homes, it’s become a useful tool, particularly for the Launchpad team. The basic idea, for those that haven’t heard of it, is simple and perhaps obvious: you get a selection of the team together in one place and sit them around the same table with particular goals to complete. In effect it’s highly compressed facetime and high-bandwidth interaction to nail those tasks that need it, before you head off again and work at a more sedate pace.

The second is directly from XP doctrine, and is that peculiar observance known as pair programming, something that is often coupled with sprints. For anybody who hasn’t encountered this before, it’s something I’ve always found rather odd. You get two programmers together, both itching to code, and you take one of their laptops away; and you don’t just force them to fight over the keyboard either: the laptop-less soul isn’t allowed to touch it. The theory here is that it frees the deprived individual of the hassle of coding and allows them to direct the programmer in ways that may not be immediately obvious, or alternatively to think about the next bit of work that needs doing. Another interesting side-effect is that it can be a good way to learn code you’ve not worked with before, if you’re the one doing the coding and you’re being guided by a sage who already knows it.

I have a funny story here, so I’m going to digress from the main stream for a moment to tell it. A few months ago I was working in the London office; Mark had been up all night coding and had then given me responsibility for getting his changes landed onto the Launchpad mainline. I’m not that hot on quite a bit of Launchpad, and fade to utterly clueless the nearer the code gets to the web application itself; and when trying to merge the two branches there were conflicts that I needed Mark’s help to resolve. Now, as anybody who’s met Mark knows, he’s a pretty good example of a Type-A personality, yet he suggested that we do a spot of pair programming to get the code in, and that I’d be the one with the laptop as it’d help me get to grips with the affected parts. This highlighted something I’ve always seen as a problem with pair programming: dealing with a person at the keyboard who has a totally different way of working to you. In particular, I’m an emacs user, whereas Mark is a die-hard vim user; and the differences go right down to the working directory I would work from, how I test results, and so on. When you’re the one without the laptop, this can be quite infuriating, as you know exactly how to solve a problem and the idiot with the keyboard is dithering about, doing things you don’t understand, just to get there. If you’re a Type-A personality, you generally snatch the laptop away at this point and do it yourself; or at least you would, if you could drive the strange editor the other person uses. I swear Mark was sulking as he gave the laptop back and let me do it.
Anyway, there is some relevance to that, and we’ll come to it in a moment. The third and final methodology is the concept of unit tests and, more specifically, test-driven development. For anyone who hasn’t come across this before, I highly recommend it; I was suspicious too when Robert Collins took it upon himself to teach me the true way, but now I’m sold. Put simply, the idea is that if you have any code that doesn’t have another piece of code in a test suite checking that it works correctly, that code is broken. More particularly, unit testing involves checking just one feature or requirement at a time, and stacking the tests together to cover all of the code paths. I’ve actually found that this improves the APIs I design, as I write the code in small blocks and functions to make them easier to test. Test-driven development takes it even further: you write the unit tests first, before you write the code they’re supposed to test, usually one or two at a time. Obviously these tests will fail at first; it’s then your job to write as little code as possible to make them pass. If your code isn’t right, you add a unit test that will fail, and modify the code to make it pass. This really comes into its own when you’re fixing bugs later: once you’ve identified the bug, you write a test case or two that reproduce the problem; these will of course fail. You can then modify the code to make them pass, and at the same time be sure you’ve not broken other functionality, thanks to the existing test suite. I’ve also found it really useful for particularly complicated or tricky pieces of code, especially those intricate algorithms that do heavy lifting.

So, on to the event itself. This was a sprint in our London office to indoctrinate Gustavo Niemeyer in the various projects he’d be working on. The goal decided for this sprint was to start the conversion of HCT to Bazaar-NG, and to do this Gustavo and I would pair program. Now, my last experience of pair programming had been that story with Mark (see, it had some relevance) and I knew I’d be just as bad if I wasn’t allowed to have the keyboard. I was also acutely aware that if Gustavo was just sitting by and watching, he wouldn’t get much benefit from it either; he needed to be actively involved in the coding so he could learn the things he’d be working on.

I came to what I thought was a pretty neat, and obvious, solution to these problems. We set up my laptop with the code we’d be working on, and on my display were two side-by-side terminals, both running screen sessions (sketched below). Gustavo then set up two same-sized terminals on his, ssh’d into my laptop from both of them and joined the screen sessions. In the left-hand one he ran vi, and in the right-hand one I ran emacs. Thus we both had keyboards, and both had editors, yet could see each other working and even steal the keyboard without danger of violence. We didn’t just sit side-by-side and code on different things though; that’s not pair programming, just ordinary programming with a bit of a voyeuristic twist. What we did was this: in my terminal I wrote the test cases for the code we needed; it was Gustavo’s job to write the code that would make them pass. We actually added a third terminal in which we could run the test cases themselves, so I could run them when I’d added something that would fail, and Gustavo could run them when he thought he’d made them pass again.
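For anyone who wants to reproduce the shared-terminal setup, it boils down to something like the following. The hostname is made up, and this assumes both people can log in as the same user on the machine holding the code:

    # On the laptop with the code: start two named screen sessions.
    screen -S left      # Gustavo ran vi in this one
    screen -S right     # I ran emacs in this one

    # From the other laptop, join each session over ssh.
    # -t forces a tty so screen can take over the terminal;
    # "scott-laptop" is a made-up hostname.
    ssh -t scott-laptop screen -x left
    ssh -t scott-laptop screen -x right

screen’s -x option attaches to a session without detaching anyone else, which is what gives both people a live view (and a keyboard) at the same time.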
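And to make the test-first rhythm concrete, here’s a tiny sketch in Python; the function and its behaviour are invented for illustration, not taken from HCT. The tests are written first and watched to fail, then just enough code is written to make them pass:

    import unittest

    def parse_package(string):
        """Split a 'name-version' string into a (name, version) tuple."""
        # Just enough code to satisfy the tests below; the second test
        # is the kind you'd add later when a bug report reveals that
        # package names can themselves contain hyphens.
        name, version = string.rsplit('-', 1)
        return name, version

    class ParsePackageTest(unittest.TestCase):
        def test_simple(self):
            self.assertEqual(parse_package('upstart-0.2'),
                             ('upstart', '0.2'))

        def test_hyphenated_name(self):
            self.assertEqual(parse_package('gnome-terminal-2.16'),
                             ('gnome-terminal', '2.16'))

    if __name__ == '__main__':
        unittest.main()

Run the suite, watch the new test fail, fix the code, run it again; the existing tests guard against regressions while you do.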
This turned out to be a rather fun way to work, and at one point I could almost convince myself that the code was writing itself to pass the test cases I was writing. It also got me thinking that it’d be really neat to use genetic algorithms to breed code to pass test cases.

Scott James Remnant: Apologies to Planet Readers

Apologies to anyone reading my blog on a Planet for the sudden dump of old posts from my blog. In fact, as has already been pointed out, I should apologise twice as much, since I wrote Planet itself and am thus doubly responsible for the flood. In order to satisfy the suggestions from the likes of Jono Bacon and Matthew Revell, and actually blog more about the things that I do, I decided it was time to try some new blog software. Until now I’ve been using PyBlosxom, and while it’s popular with the geek crowd, I’ve never really got on with it. The principal problem is that it’s a pain to actually add blog entries to it, or to manage those you’ve added previously. I’m slightly ashamed to say that I wanted something with a web interface. Thom May pointed me at Typo, which seems pretty neat so far, and it doesn’t require PHP, which can only be a good thing! So now when I don’t post, I won’t be able to blame the tools. Damn. Maybe this wasn’t such a good idea after all?

Scott James Remnant: What I want in edgy+1

Now that edgy has been frozen for the beta release in a week’s time, and the next developer summit, where the plans for the next release will be discussed, has been announced, I think it’s the perfect time to start thinking about the kinds of features we want to see in that release. At our recent sprint in Wiesbaden, Mark reminded us that when Ubuntu first released we were leading the field in many of the components we installed by default. Many users came to us because we provided Linux 2.6, hotplug/udev and Project Utopia by default, along with the latest GNOME release and development framework. Now everybody does that, but we led the way. With the dapper release, we produced something that could be supported for a long time. With edgy we’ve scratched our itches and fixed things under the hood that have been causing us problems. edgy+1 seems a good time to integrate the hottest new technologies and lead the way again. So what do I think is exciting, and what do I want in edgy+1?

telepathy, farsight and galago

Anybody who has used the 2006 software release for the Nokia 770 Internet Tablet will have seen these in action already. They’re, in my opinion, some of the most exciting projects currently being worked on, and ripe for integration into Ubuntu. Telepathy is a communication framework built on DBus that allows any application to establish communication with users all around the world, using any protocol from IRC to MSN to VoIP. It uses Farsight to provide the codecs and protocol support on top of the GStreamer (http://gstreamer.freedesktop.org/) media framework. And Galago glues it all together, providing presence information about contacts and allowing users to specify their own status.

[Screenshot: Galago Presence Applet]

This means that you’ll be able to contact your contacts. Your calendar alarm tells you that it’s a colleague’s birthday; you’d be able to click their name and see a selection of different ways to contact that person right now, from e-mail, to opening an MSN or Jabber window, to establishing a VoIP or SIP call. The most exciting thing is that because every application on the desktop has easy access to this, people are coming up with fantastic ideas for using it!

zeroconf

By this, I mean zero-configuration networking, something the OLPC project looks set to do very well indeed. Where no existing network infrastructure exists, your computer should still be able to communicate with other nearby computers. It should use ad-hoc wireless networks, link-local IP addresses and server-free DNS resolution with libnss-mdns. Play games with your friends on the train without touching a button. Chat with other people in a lecture hall, or at a conference where the existing network is unavailable. Hassle-free home and office networking without any complicated set-up. Networking that “just works”.

avahi

Whether your computer is on an infrastructure or an ad-hoc network, it should be able to locate resources on the network and share its own with other people. Avahi allows applications to do this, via DBus, using a multicast form of DNS compatible with Apple’s Bonjour protocol. The immediately obvious benefit is that one can locate such things as file servers without any complicated directory; more fun things can be done too, such as sharing your music from RhythmBox with other users on the network. Combine it with telepathy et al. and you can establish communication with anyone else, over a wide range of different protocols.
The immediate problem here is that we currently have a “no open ports” policy, which, if we’re going to give users the opportunity to establish ad-hoc networks, is almost certainly a very good thing to have. We already have exceptions to that policy for DNS and DHCP; it may be that Avahi (mDNS) joins that list. Another option is a “mobile-phone like” interface for enabling and disabling discovery and sharing.

Bluetooth

I don’t think we should stop discovery and sharing at 802.* networking either. Linux actually has a pretty decent Bluetooth stack; it’s something that we should start taking advantage of. Your computer should be able to discover paired or known Bluetooth devices when they’re in range, without intervention, and communicate with them. If I attempt to establish a network connection and my phone is nearby, that should be offered as a method of getting on the Internet. If possible, it should also be offered through telepathy as a method of contacting people in my address book!

Synchronisation

Together, these technologies give us a really big possibility: automatic synchronisation of users’ information. When I bring my laptop home, it should automatically update my desktop with any changes I’ve made to my address book or calendar, any shared files, and so on. Likewise, it should automatically be updated with any changes made on the network, to a company address book on a server, for example. And why stop there? Don’t just share your music; synchronise it, so that your laptop, desktop and iPod have the same selection available at all times. Hassle-free backups are an obvious win here. But we’re still stopping short. Why are my mobile phone and PDA excluded from this? Via Bluetooth, my address book and calendar should be automatically updated from them, and they should be automatically updated in return. If I add a new number to my phone’s memory, it should appear in my computer’s address book; and my computer should offer me the ability to contact that person (using VoIP, for example).
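As a flavour of how simple the Avahi piece could be for application authors, here’s a rough Python sketch of browsing for shared music over Avahi’s D-BUS interface. This is my own illustration, written against my understanding of Avahi’s published D-BUS API; the service type is the DAAP one used by music sharing, and none of this is Ubuntu-specific code:

    import dbus
    import gobject
    from dbus.mainloop.glib import DBusGMainLoop

    # Hook dbus into the GLib main loop so we receive signals.
    DBusGMainLoop(set_as_default=True)
    bus = dbus.SystemBus()

    # The Avahi daemon exposes a Server object at the root path.
    server = dbus.Interface(
        bus.get_object('org.freedesktop.Avahi', '/'),
        'org.freedesktop.Avahi.Server')

    # Ask the daemon to browse for DAAP (music sharing) services on
    # all interfaces and protocols (-1 means "unspecified"), in the
    # local multicast DNS domain.
    path = server.ServiceBrowserNew(-1, -1, '_daap._tcp', 'local',
                                    dbus.UInt32(0))
    browser = dbus.Interface(
        bus.get_object('org.freedesktop.Avahi', path),
        'org.freedesktop.Avahi.ServiceBrowser')

    def item_new(interface, protocol, name, type, domain, flags):
        # Called each time a matching service appears on the network.
        print('Found shared music: %s (%s)' % (name, domain))

    browser.connect_to_signal('ItemNew', item_new)
    gobject.MainLoop().run()

Resolving a discovered service to a host and port is one more call (ResolveService) against the same Server object; after that, the application just connects as normal.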
